We describe in this paper our experience of developing a large-scale, highly distributed multi-agent system using wireless-networked sensors. We provide solutions to the problems of localization (position estimation) and dynamic, real-time mobile object tracking, which we call PET problems for short, using wireless sensor networks. We propose system architectures and a set of distributed algorithms for organizing and scheduling cooperative computation in distributed environments, as well as distributed algorithms for localization and real-time object tracking. Based on these distributed algorithms, we develop and implement a hardware system and software simulator for the PET problems. Finally, we present some experimental results on distance measurement accuracy using radio signal strengths of the wireless sensors and discuss future work.
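The distance measurement via radio signal strength mentioned above can be illustrated with a standard log-distance path-loss model followed by least-squares trilateration. This is a generic sketch under assumed parameters (reference RSSI at 1 m, path-loss exponent), not the paper's actual algorithm:

```python
def rssi_to_distance(rssi_dbm, rssi_at_1m=-40.0, path_loss_exp=2.0):
    """Invert the log-distance path-loss model:
    RSSI(d) = RSSI_1m - 10 * n * log10(d)."""
    return 10 ** ((rssi_at_1m - rssi_dbm) / (10.0 * path_loss_exp))

def trilaterate(anchors, dists):
    """Least-squares 2-D position from >= 3 anchor positions and range
    estimates. Subtracting the first circle equation from the others
    linearizes the system; the 2x2 normal equations are solved directly."""
    (x0, y0), d0 = anchors[0], dists[0]
    a11 = a12 = a22 = b1 = b2 = 0.0
    for (xi, yi), di in zip(anchors[1:], dists[1:]):
        ax, ay = 2.0 * (xi - x0), 2.0 * (yi - y0)
        b = d0 ** 2 - di ** 2 + xi ** 2 - x0 ** 2 + yi ** 2 - y0 ** 2
        a11 += ax * ax; a12 += ax * ay; a22 += ay * ay
        b1 += ax * b; b2 += ay * b
    det = a11 * a22 - a12 * a12
    return ((a22 * b1 - a12 * b2) / det, (a11 * b2 - a12 * b1) / det)
```

In practice the RSSI-derived ranges are noisy, which is why more than three anchors and a least-squares fit are preferable to exact circle intersection.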
Guoqiang ZHONG Satoshi AMAMIYA Ken'ichi TAKAHASHI Tsunenori MINE Makoto AMAMIYA
Agent-based computing has been advocated by many researchers and software developers as a promising and innovative way to model, design and implement complex Web-related applications. The KODAMA (Kyushu university Open & Distributed Autonomous MultiAgent) project, described in this paper, is our endeavour to advance both the technology and the engineering of multiagent systems. In particular, we have noted that software agents may be a potential solution to many problems faced by today's Web. However, building a high-quality, large-scale multiagent system that can operate in open environments is a great challenge. So far, we have devoted considerable effort to designing and implementing a generic agent architecture, a hierarchical agent community structure, and an independent network communication middleware. To confirm that KODAMA can be used to create Web-agent applications, we have tested its network communication performance and a prototype distributed database retrieval system. The results show that KODAMA is suitable for developing network-aware applications.
Agent technology is widely recognized as a new paradigm for the design of concurrent software and systems. The aim of this paper is to give a mathematical foundation for the design and analysis of multi-agent systems by means of a Petri-net-based model. The proposed model, called PN2, is based on place/transition nets (P/T nets), which form one of the simplest classes of Petri nets. The main difference between PN2s and P/T nets is that each token, representing an agent, is itself a P/T net. PN2s are simple enough for mathematical analysis, such as invariant analysis, yet retain sufficient modeling power.
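A minimal sketch of the core idea, a place/transition net whose tokens may themselves be nets, might look as follows. The class and method names are ours, and the one-token-per-arc firing rule is a simplification of the full PN2 formalism:

```python
class PTNet:
    """A place/transition net whose tokens may themselves be PTNets:
    the PN2 idea of agents as net-structured tokens, sketched minimally."""

    def __init__(self, places, transitions, marking):
        # transitions: name -> (input place list, output place list);
        # in this sketch each transition moves tokens one-for-one.
        self.transitions = transitions
        self.marking = {p: list(marking.get(p, [])) for p in places}

    def enabled(self, t):
        ins, _ = self.transitions[t]
        return all(len(self.marking[p]) >= ins.count(p) for p in set(ins))

    def fire(self, t):
        ins, outs = self.transitions[t]
        if not self.enabled(t):
            raise ValueError(f"transition {t!r} is not enabled")
        moved = [self.marking[p].pop() for p in ins]  # consume input tokens
        for p, tok in zip(outs, moved):               # move them downstream
            self.marking[p].append(tok)
```

An agent token, being a `PTNet` itself, can fire its own internal transitions while it is moved around the outer net, which is what gives PN2s their extra modeling power over plain P/T nets.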
Tatsuya YAMAZAKI Masakatsu KOSUGA Nagao OGINO Jun MATSUDA
For distributed multimedia applications, adaptive QoS (quality of service) management mechanisms are needed to guarantee diverse and changing end-to-end QoS requirements. In this paper, we propose an adaptive QoS management framework based on multi-agent systems. In this framework, QoS management is divided into two phases: the flow establishment and renegotiation phase, and the media-transfer phase. Adaptation to system resource changes and varying user requirements is accomplished by direct or indirect collaboration among the agents in each phase. In the flow establishment and renegotiation phase, application agents determine the optimal resource allocation through QoS negotiations that maximize the total users' utility. In the media-transfer phase, stream agents collaborate to adjust each stream's QoS reactively. In addition, personal agents help novice users specify stream QoS without any a priori knowledge of QoS. To make the interworking of agents tractable, a QoS mapping mechanism is needed to translate QoS parameters between levels, since the expression of QoS differs from level to level. As an example of a multimedia application based on the proposed framework, a one-way video system is designed. Computer simulation results show the validity of the proposed framework.
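The QoS mapping mechanism described above, translating QoS parameters between levels, might be sketched as a chain from user-level labels through application-level parameters to a system-level bandwidth requirement. The preset values and the bits-per-pixel compression factor are illustrative assumptions, not values from the paper:

```python
USER_TO_APP = {            # user-level label -> application-level parameters
    "high":   {"width": 640, "height": 480, "fps": 30},
    "medium": {"width": 320, "height": 240, "fps": 15},
    "low":    {"width": 160, "height": 120, "fps": 10},
}

def app_to_system(app, bits_per_pixel=0.2):
    """Application-level -> system-level: required bandwidth in bit/s.
    bits_per_pixel stands in for a codec-dependent compression factor."""
    return app["width"] * app["height"] * app["fps"] * bits_per_pixel

def user_to_system(level):
    """Full mapping: user-level QoS label -> system-level bandwidth."""
    return app_to_system(USER_TO_APP[level])
```

Such a table lets a personal agent translate a novice user's "high/medium/low" choice into the parameters the stream and application agents negotiate over.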
K. Suzanne BARBER Ryan M. McKAY Anuj GOEL David C. HAN Joonoo KIM Tse-Hsin LIU Cheryl E. MARTIN
The need for responsive, flexible agents is pervasive in many application domains due to their complex, dynamic, and uncertain nature. Dynamic Adaptive Autonomy allows Sensible Agents to reorganize themselves during system operation to solve different problems in the face of these complex and dynamic environments. This paper presents both functional and implementation architectures for Sensible Agent systems. The functional architecture supports concepts from the distributed computing community by separating internal agent functionality into a discrete set of modules whose interactions are formally specified using the Interface Definition Language (IDL) from the Common Object Request Broker Architecture (CORBA). These four modules are: (1) the Perspective Modeler, which contains the agent's explicit model of its local, subjective view of the world; (2) the Autonomy Reasoner, which determines the appropriate decision-making framework for each of the agent's goals; (3) the Action Planner, which interprets domain-specific goals, plans to achieve these goals, and executes the generated plans; and (4) the Conflict Resolution Advisor, which identifies, classifies, and recommends possible solution strategies for resolving conflicts between this agent and other agents. The implementation architecture has been realized in a testbed that promotes (1) language and platform independence, (2) parallel development, (3) rapid integration of evolving representations and algorithms implementing agent functionality, (4) repeatable experimentation and testing, (5) environment and agent visualization, and (6) inter-domain application portability. The testbed uses the Inter-Language Unification (ILU) ORB from Xerox to provide the CORBA layer of inter-module and inter-agent communication. A three-dimensional visualization of the domain is provided by a CORBA-connected Virtual Reality Modeling Language (VRML) model, while low-level data collection is accomplished using a CORBA-connected Java application.
The combination of a distributed functional architecture with a distributed implementation architecture provides a high level of flexibility, visualization capability and experimental fidelity for evaluating the performance of Sensible Agents in complex, dynamic and uncertain environments.
Sachiyo ARAI Kazuteru MIYAZAKI Shigenobu KOBAYASHI
This paper describes Profit-Sharing, a reinforcement learning approach that can be used to design a coordination strategy in a multi-agent system, and demonstrates its effectiveness empirically in the coil-yard of a steel manufacturer. This domain consists of multiple cranes that operate asynchronously but need coordination to adjust their initial task-execution plans so as to avoid the collisions that resource limitations would otherwise cause. The problem is beyond classical expert hand-coding methods as well as mathematical analysis because of scattered information, stochastically generated tasks, and, moreover, the difficulty of completing tasks on schedule. In recent years, many applications of reinforcement learning algorithms based on Dynamic Programming (DP), such as Q-learning and the Temporal Difference method, have been introduced. They promise optimal performance of the agent in Markov decision processes (MDPs), but in non-MDPs, such as multi-agent domains, there is no guarantee that the agent's policy will converge. Profit-Sharing, in contrast to the DP-based methods, can guarantee convergence to a rational policy, meaning that the agent reaches one of the desirable states, even in non-MDPs where agents learn concurrently and competitively. We therefore embedded Profit-Sharing into the crane operators to acquire cooperative rules in this dynamic domain, and demonstrate its applicability to the real world by comparison with a RAP (Reactive Action Planner) model encoded from expert knowledge.
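The Profit-Sharing scheme can be sketched in a few lines for a single agent: every (state, action) rule fired during an episode is reinforced when reward arrives, with credit decaying geometrically as we move back from the rewarded step. This is an illustrative toy, not the paper's crane system; the geometric decay is simply a common credit-assignment choice for Profit-Sharing:

```python
import random
from collections import defaultdict

def profit_sharing(env_step, env_reset, actions, episodes=200,
                   decay=0.2, epsilon=0.1, max_steps=50, seed=0):
    """Minimal Profit-Sharing sketch. On receiving reward r, every
    (state, action) rule fired in the episode is reinforced by
    r * decay**k, where k counts back from the rewarded step."""
    rng = random.Random(seed)
    w = defaultdict(float)                          # rule weights
    for _ in range(episodes):
        state, episode = env_reset(), []
        for _ in range(max_steps):
            if rng.random() < epsilon:              # occasional exploration
                a = rng.choice(actions)
            else:                                   # greedy, random tie-break
                best = max(w[(state, b)] for b in actions)
                a = rng.choice([b for b in actions if w[(state, b)] == best])
            episode.append((state, a))
            state, r, done = env_step(state, a)
            if r:                                   # credit flows backwards
                for k, rule in enumerate(reversed(episode)):
                    w[rule] += r * decay ** k
            if done:
                break
    return w
```

Unlike Q-learning, no value is bootstrapped from a successor state, which is why the scheme does not depend on the Markov property holding.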
Hidenori KAWAMURA Masahito YAMAMOTO Keiji SUZUKI Azuma OHUCHI
Recently, researchers in various fields have shown interest in the behavior of creatures from the viewpoint of adaptiveness and flexibility. Ants, social insects, exhibit collective behavior in performing tasks that cannot be carried out by an individual ant. In ant colonies, chemical substances called pheromones are used to communicate important information on global behavior. For example, ants looking for food mark the way back to their nest with a specific type of pheromone; other ants can follow the pheromone trail and find their way to the food efficiently. In 1991, Colorni et al. proposed the ant algorithm for Traveling Salesman Problems (TSPs) based on the analogy of such foraging behavior and pheromone communication. In the ant algorithm, a colony consists of many simple ant agents that repeatedly construct TSP tours, preferring subtours that connect nearby cities and carry strong pheromones. Ants completing their tours lay pheromones of varying intensity on the subtours they passed, according to distance. That is, subtours likely to belong to better tours tend to receive strong pheromones, so the ant agents home in on good regions of the search space through this positive feedback mechanism. In this paper, we propose a multiple-ant-colonies algorithm extended from the ant algorithm. This algorithm uses several ant colonies to solve a TSP, whereas the original uses only a single colony. Moreover, two kinds of pheromone effects, positive and negative, are introduced as colony-level interactions. Through these interactions, the colonies can exchange good schemata for solving a problem while maintaining their own variation in the search process. The proposed algorithm shows better performance than the original algorithm, with almost the same agent strategy used in both algorithms except for the introduction of colony-level interactions.
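The single-colony baseline the abstract builds on can be sketched as the textbook Ant System: each ant chooses its next city with probability proportional to pheromone strength and inverse distance, and after each iteration pheromone evaporates and ants deposit an amount inversely proportional to their tour length. The parameter values below are conventional defaults, not the paper's settings, and the multi-colony interactions are not shown:

```python
import math
import random

def ant_system(cities, n_ants=20, n_iters=100, alpha=1.0, beta=3.0,
               rho=0.5, q=1.0, seed=0):
    """Single-colony Ant System for the TSP (Colorni et al. style)."""
    rng = random.Random(seed)
    n = len(cities)
    dist = [[math.dist(a, b) for b in cities] for a in cities]
    tau = [[1.0] * n for _ in range(n)]             # pheromone levels
    best_tour, best_len = None, float("inf")
    for _ in range(n_iters):
        tours = []
        for _ in range(n_ants):
            tour = [rng.randrange(n)]
            unvisited = set(range(n)) - {tour[0]}
            while unvisited:
                i, cand = tour[-1], list(unvisited)
                # prefer edges with strong pheromone and short length
                weights = [tau[i][j] ** alpha * (1.0 / dist[i][j]) ** beta
                           for j in cand]
                j = rng.choices(cand, weights)[0]
                tour.append(j)
                unvisited.remove(j)
            length = sum(dist[tour[k]][tour[(k + 1) % n]] for k in range(n))
            tours.append((tour, length))
            if length < best_len:
                best_tour, best_len = tour, length
        # evaporate, then deposit q / length on each ant's edges
        tau = [[t * (1.0 - rho) for t in row] for row in tau]
        for tour, length in tours:
            for k in range(n):
                i, j = tour[k], tour[(k + 1) % n]
                tau[i][j] += q / length
                tau[j][i] += q / length
    return best_tour, best_len
```

The evaporation rate `rho` keeps the positive feedback from freezing the search too early; the paper's extension adds negative pheromone effects between colonies on top of this mechanism.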
Tetsuya YOSHIDA Koichi HORI Shinichi NAKASUKA
This paper proposes a new method to improve cooperation in concurrent systems within the framework of Multi-Agent Systems (MAS) by utilizing reinforcement learning. When subsystems work independently and concurrently, achieving appropriate cooperation among them is important for improving the effectiveness of the overall system. Treating subsystems as agents makes it easy to deal explicitly with the interactions among them, since these can be modeled naturally as communication among agents with intended information. In our approach, agents try to learn, via reward, the appropriate balance between exploration and exploitation, which is important in distributed and concurrent problem solving in general. By focusing on how reward is given in reinforcement learning, rather than on the learning equation, two kinds of reward are defined in the context of cooperation between agents, in contrast to reinforcement learning within a single-agent framework. In our approach, reward for insistence by an individual agent facilitates exploration, and reward for concession to other agents facilitates exploitation. Our cooperation method was examined through experiments on the design of micro-satellites, and the results showed that letting agents themselves learn the appropriate balance between insistence and concession was effective, to some extent, in facilitating cooperation among them. The results also suggested the possibility of using the relative magnitude of these rewards as a new control parameter for the overall behavior of a MAS.
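The effect of the relative magnitude of the two rewards can be illustrated with a toy two-agent negotiation, our own construction rather than the paper's satellite-design testbed. Each round both agents choose to insist or concede; a joint design is accepted only if at least one concedes, acceptance pays `r_insist` to an insisting agent and `r_concede` to a conceding one, and each agent keeps a running-average value per action:

```python
import random

def negotiate(r_insist, r_concede, rounds=1000, lr=0.1, eps=0.1, seed=0):
    """Toy sketch of insistence vs. concession rewards. Returns the
    fraction of rounds in which a joint design was accepted."""
    rng = random.Random(seed)
    q = [{"insist": 0.0, "concede": 0.0} for _ in range(2)]
    accepted = 0
    for _ in range(rounds):
        acts = []
        for me in q:
            if rng.random() < eps:                  # exploration
                acts.append(rng.choice(["insist", "concede"]))
            else:                                   # greedy action
                acts.append(max(me, key=me.get))
        ok = "concede" in acts                      # deadlock if both insist
        for me, a in zip(q, acts):
            r = (r_insist if a == "insist" else r_concede) if ok else 0.0
            me[a] += lr * (r - me[a])               # running-average update
        accepted += ok
    return accepted / rounds
```

When the concession reward dominates, both agents learn to yield and deadlocks become rare; when the insistence reward dominates, mutual insistence produces deadlock spells, which is the sense in which the reward ratio acts as a control parameter for overall MAS behavior.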
Hidenori KAWAMURA Masahito YAMAMOTO Tamotsu MITAMURA Keiji SUZUKI Azuma OHUCHI
In this paper, we propose a new cooperative search algorithm based on pheromone communication for solving Vehicle Routing Problems. In this algorithm, multiple agents cooperatively partition the problem and search partial solutions independently using pheromone communication, which mimics the communication method of real ants. Computer experiments confirm the cooperative search behavior of the multiple agents.
Tetsuya YOSHIDA Koichi HORI Shinichi NAKASUKA
This paper proposes a new method to improve cooperation in concurrent systems within the framework of Multi-Agent Systems (MAS). Since subsystems work concurrently, achieving appropriate cooperation among them is important for improving the effectiveness of the overall system. When subsystems are modeled as agents, it is easy to deal explicitly with the interactions among them, since these can be modeled naturally as communication among agents with intended information. In contrast to previous approaches, which provided the syntax of communication protocols without semantics, we focus on the semantics of cooperation in MAS and aim to allow agents to exploit the communicated information for cooperation. This is attempted by utilizing coarser-grained communication, based on a different perspective on the balance between the formality and richness of communication contents, so that each piece of content can convey more meaningful information in application domains. In our approach, agents cooperate with each other by giving feedback based on the metaphor of explanation, which is widely used in human interactions, in contrast to previous approaches that use direct orders given by a leader according to pre-defined cooperation strategies. Agents show the difference between a proposal and the counter-proposals constructed in response to it, which are given as feedback in terms easily understandable to the receiver. By comparing proposals, agents identify which parts are agreed and disagreed upon by the relevant agents, and reflect this analysis in their subsequent behavior. Furthermore, communication contents are annotated by agents to indicate their importance in decision making, which helps make explanations and feedback more understandable.
Our cooperation method was examined through experiments on the design of micro-satellites, and the results showed that it was effective, to some extent, in facilitating cooperation among agents.
Keiji GYOHTEN Tomoko SUMIYA Noboru BABAGUCHI Koh KAKUSHO Tadahiro KITAHASHI
This paper describes COCE (COordinative Character Extractor), a method for extracting printed Japanese characters and their character strings from all sorts of document images. COCE is based on a multi-agent system in which each agent tries to find a character string and extracts the characters in it. For adaptability, the agents are allowed to examine arbitrary parts of documents and to extract characters using only knowledge independent of the layout. Moreover, the agents check and correct their results, sometimes with the help of other agents. Experimental results have verified the effectiveness of our approach.